@nlpjs/lang-es
You can install @nlpjs/lang-es:
npm install @nlpjs/lang-es
Normalization converts a text to lowercase and removes diacritics from its characters.
const { NormalizerEs } = require('@nlpjs/lang-es');
const normalizer = new NormalizerEs();
const input = 'Esto debería ser normalizado';
const result = normalizer.normalize(input);
console.log(result);
// output: esto deberia ser normalizado
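This kind of normalization can be sketched in plain JavaScript with Unicode decomposition (an illustrative approximation only; the actual rules of NormalizerEs may differ):

```javascript
// Sketch of Spanish normalization: lowercase, then strip combining
// diacritical marks via Unicode NFD decomposition.
function normalizeEsSketch(text) {
  return text
    .toLowerCase()
    .normalize('NFD') // decompose, e.g. í -> i + U+0301
    .replace(/[\u0300-\u036f]/g, ''); // drop combining marks
}

console.log(normalizeEsSketch('Esto debería ser normalizado'));
// -> 'esto deberia ser normalizado'
```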
Tokenization splits a sentence into words.
const { TokenizerEs } = require('@nlpjs/lang-es');
const tokenizer = new TokenizerEs();
const input = "Esto debería ser tokenizado";
const result = tokenizer.tokenize(input);
console.log(result);
// output: [ 'Esto', 'debería', 'ser', 'tokenizado' ]
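Conceptually, a tokenizer of this kind splits on runs of non-letter characters while keeping Spanish accented letters (an illustrative sketch; TokenizerEs's actual rules may differ):

```javascript
// Sketch of a Spanish tokenizer: split on any run of characters that is
// not a letter (including accented vowels, ü and ñ), drop empty tokens.
function tokenizeEsSketch(text) {
  return text.split(/[^a-záéíóúüñ]+/i).filter(Boolean);
}

console.log(tokenizeEsSketch('Esto debería ser tokenizado'));
// -> [ 'Esto', 'debería', 'ser', 'tokenizado' ]
```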
The tokenizer can also normalize the sentence before tokenizing; to do so, pass true as the second argument to the tokenize method:
const { TokenizerEs } = require('@nlpjs/lang-es');
const tokenizer = new TokenizerEs();
const input = "Esto debería ser tokenizado";
const result = tokenizer.tokenize(input, true);
console.log(result);
// output: [ 'esto', 'deberia', 'ser', 'tokenizado' ]
Using the class StopwordsEs you can identify whether a word is a stopword:
const { StopwordsEs } = require('@nlpjs/lang-es');
const stopwords = new StopwordsEs();
console.log(stopwords.isStopword('un'));
// output: true
console.log(stopwords.isStopword('desarrollador'));
// output: false
Using the class StopwordsEs you can remove stopwords from an array of words:
const { StopwordsEs } = require('@nlpjs/lang-es');
const stopwords = new StopwordsEs();
console.log(stopwords.removeStopwords(['he', 'visto', 'a', 'un', 'programador']));
// output: ['he', 'visto', 'programador']
Using the class StopwordsEs you can reset its dictionary and build it from another set of words:
const { StopwordsEs } = require('@nlpjs/lang-es');
const stopwords = new StopwordsEs();
stopwords.dictionary = {};
stopwords.build(['he', 'visto']);
console.log(stopwords.removeStopwords(['he', 'visto', 'a', 'un', 'programador']));
// output: ['a', 'un', 'programador']
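Stopword filtering itself amounts to membership testing against a word set, which can be sketched in plain JavaScript (illustrative only; StopwordsEs ships its own Spanish dictionary):

```javascript
// Minimal stopword filter over a plain Set. The words below are a tiny
// hand-picked sample, not the library's real Spanish stopword list.
const stopwordSet = new Set(['a', 'un', 'una', 'el', 'la']);
const removeStopwordsSketch = (tokens) =>
  tokens.filter((token) => !stopwordSet.has(token));

console.log(removeStopwordsSketch(['he', 'visto', 'a', 'un', 'programador']));
// -> [ 'he', 'visto', 'programador' ]
```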
A stemmer is an algorithm that calculates the stem (root) of a word by removing its affixes.
You can stem a single word using the stemWord method:
const { StemmerEs } = require('@nlpjs/lang-es');
const stemmer = new StemmerEs();
const input = 'programador';
console.log(stemmer.stemWord(input));
// output: program
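The idea of suffix stripping can be sketched in a few lines of plain JavaScript. This is not the multi-step Spanish Porter-style algorithm that StemmerEs applies; it only illustrates the principle:

```javascript
// Toy suffix stripper: remove the first matching suffix, provided
// enough of the word remains to serve as a stem. The suffix list is
// a small hand-picked sample, not the real algorithm's rule set.
const suffixes = ['adores', 'ador', 'amiento', 'ar', 'er', 'ir'];
function stemSketch(word) {
  for (const suffix of suffixes) {
    if (word.endsWith(suffix) && word.length - suffix.length >= 3) {
      return word.slice(0, word.length - suffix.length);
    }
  }
  return word;
}

console.log(stemSketch('programador')); // -> 'program'
```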
You can stem an array of words using the stem method:
const { StemmerEs } = require('@nlpjs/lang-es');
const stemmer = new StemmerEs();
const input = ['he', 'visto', 'a', 'un', 'programador'];
console.log(stemmer.stem(input));
// output: [ 'hab', 'vist', 'a', 'un', 'program' ]
As you can see, the stemmer does not normalize internally, so uppercase words stay uppercase. It also only matches lowercase affixes: programador is stemmed to program, but PROGRAMADOR is left unchanged.
You can tokenize and stem a sentence, including normalization, with the method tokenizeAndStem:
const { StemmerEs } = require('@nlpjs/lang-es');
const stemmer = new StemmerEs();
const input = 'He visto a un PROGRAMADOR';
console.log(stemmer.tokenizeAndStem(input));
// output: [ 'hab', 'vist', 'a', 'un', 'program' ]
The tokenizeAndStem method of the StemmerEs class accepts a boolean as its second parameter that sets whether the stemmer keeps the stopwords (true) or removes them (false). Before using it, a stopwords instance must be set on the stemmer:
const { StemmerEs, StopwordsEs } = require('@nlpjs/lang-es');
const stemmer = new StemmerEs();
stemmer.stopwords = new StopwordsEs();
const input = 'he visto a un programador';
console.log(stemmer.tokenizeAndStem(input, false));
// output: ['hab', 'vist', 'program']
To use sentiment analysis you'll need to create a new Container and register the LangEs plugin, because internally the SentimentAnalyzer class retrieves the normalizer, tokenizer, stemmer and sentiment dictionaries from the container.
const { Container } = require('@nlpjs/core');
const { SentimentAnalyzer } = require('@nlpjs/sentiment');
const { LangEs } = require('@nlpjs/lang-es');
(async () => {
const container = new Container();
container.use(LangEs);
const sentiment = new SentimentAnalyzer({ container });
const result = await sentiment.process({ locale: 'es', text: 'me gustan los gatos' });
console.log(result.sentiment);
})();
// output:
// {
// score: 0.266,
// numWords: 4,
// numHits: 1,
// average: 0.0665,
// type: 'senticon',
// locale: 'es',
// vote: 'positive'
// }
The output of the sentiment analysis includes score (the sum of the sentiment values of the matched words), numWords (the number of words in the utterance), numHits (the number of words found in the sentiment dictionary), average (score divided by numWords), type (the sentiment dictionary used, here senticon), locale, and vote (the overall sentiment).
You can put all the pieces together with the Nlp class to train intents and generate answers:
const { containerBootstrap } = require('@nlpjs/core');
const { Nlp } = require('@nlpjs/nlp');
const { LangEs } = require('@nlpjs/lang-es');
(async () => {
const container = await containerBootstrap();
container.use(Nlp);
container.use(LangEs);
const nlp = container.get('nlp');
nlp.settings.autoSave = false;
nlp.addLanguage('es');
// Adds the utterances and intents for the NLP
nlp.addDocument('es', 'adios por ahora', 'greetings.bye');
nlp.addDocument('es', 'adios y ten cuidado', 'greetings.bye');
nlp.addDocument('es', 'muy bien nos vemos luego', 'greetings.bye');
nlp.addDocument('es', 'debo irme', 'greetings.bye');
nlp.addDocument('es', 'hola', 'greetings.hello');
// Train also the NLG
nlp.addAnswer('es', 'greetings.bye', 'hasta la proxima');
nlp.addAnswer('es', 'greetings.bye', '¡te veo pronto!');
nlp.addAnswer('es', 'greetings.hello', '¡hola que tal!');
nlp.addAnswer('es', 'greetings.hello', '¡saludos!');
await nlp.train();
const response = await nlp.process('es', 'debo irme');
console.log(response);
})();
You can read the guide on how to contribute at Contributing.
Made with contributors-img.
You can read the Code of Conduct at Code of Conduct.
This project is developed by AXA Group Operations Spain S.A.
If you need to contact us, you can do so at opensource@axa.com.
Copyright (c) AXA Group Operations Spain S.A.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.